    Characterising soil moisture in transport corridors using remote sensing

    This thesis assesses the ability of remote sensing techniques to characterise soil moisture in a transport corridor environment. Much of the world’s transport networks are built on earthwork embankments or in cuttings. In the UK, many of these earthworks were constructed in the mid-19th century and are susceptible to slope instability. Instability in transport corridors is often triggered by an increase in pore pressure, which is directly influenced by an increase in soil moisture. Although a number of studies have investigated the use of remote sensing techniques for estimating soil moisture, they have tended to be conducted under controlled conditions, and few have considered their potential for operational use. This study addresses this gap by exploring the use of high-spatial-resolution digital elevation models (DEMs) and airborne hyperspectral imagery for characterising soil moisture in transport corridors. A number of terrain analysis (topographic wetness index (TWI), potential solar radiation, aspect) and spectral analysis (red edge position estimation, derivative stress ratios, continuum removal analysis, partial least squares (PLS) regression modelling, mapping biological indicator values) techniques were assessed using terrestrial systems over a test embankment, and airborne data for a transport corridor. The terrain analysis metrics TWI and potential solar radiation were found to be highly sensitive to the DEM spatial interpolation routine used, with a thin plate spline routine performing best in this study. This work also demonstrated that Ellenberg indicator values extended for the UK can be mapped successfully for transport corridor environments, offering potential for a number of different applications. Individually, the techniques were shown to be generally poor predictors of soil moisture. However, an integrated statistical model provided an improved characterisation of soil moisture, with a coefficient of determination (R²) of 0.67. Analysis of the model results, along with field observations, revealed that soil moisture is highly variable over the transport corridor investigated. Soil moisture was shown to increase in a non-linear fashion towards the toe of earthwork slopes, while contributions from surrounding fields often led to concentrations of moisture in cutting earthworks. Critically, while these patterns could be captured using the data investigated in this study, such spatial variability is rarely taken into account in analytical slope stability models, raising potentially important challenges in this respect.
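    The topographic wetness index named above has a standard closed form, TWI = ln(a / tan β), where a is the specific catchment area and β the local slope. The Python sketch below is a minimal illustration of that formula, assuming flow accumulation and slope grids have already been derived from the interpolated DEM; the function and argument names are ours for illustration, not the thesis's actual processing chain.

    import numpy as np

    def topographic_wetness_index(flow_acc, slope_deg, cell_size=1.0):
        # flow_acc:  upslope contributing area in cells, from a prior
        #            flow-routing step over the DEM
        # slope_deg: local slope in degrees
        # cell_size: DEM resolution in metres
        #
        # Specific catchment area: contributing area per unit contour
        # width, i.e. (flow_acc * cell_size**2) / cell_size.
        a = flow_acc * cell_size
        # Clamp near-flat cells so tan(beta) = 0 cannot blow up the log.
        tan_beta = np.tan(np.radians(np.maximum(slope_deg, 0.01)))
        return np.log(a / tan_beta)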

    The role of discharge variability in determining alluvial stratigraphy

    We illustrate the potential for using physics-based modeling to link alluvial stratigraphy to large river morphology and dynamics. Model simulations, validated using ground penetrating radar data from the Río Paraná, Argentina, demonstrate a strong relationship between bar-scale set thickness and channel depth, which applies across a wide range of river patterns and bar types. We show that hydrologic regime, indexed by discharge variability and flood duration, exerts a first-order influence on morphodynamics and hence bar set thickness, and that planform morphology alone may be a misleading variable for interpreting deposits. Indeed, our results illustrate that rivers evolving under contrasting hydrologic regimes may have very similar morphology, yet be characterized by marked differences in stratigraphy. This realization represents an important limitation on the application of established theory that links river topography to alluvial deposits, and highlights the need to obtain field evidence of discharge variability when developing paleoenvironmental reconstructions. Model simulations demonstrate the potential for deriving such evidence using metrics of paleocurrent variance.
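    The closing sentence points to paleocurrent variance as a recoverable signal of discharge variability. The abstract does not name a specific metric; a common choice in directional statistics is the circular variance, sketched below (the function name and example azimuths are illustrative).

    import numpy as np

    def circular_variance(directions_deg):
        # Circular variance of paleocurrent azimuths: 0 when all
        # measurements point the same way, approaching 1 when they are
        # spread around the full circle.
        theta = np.radians(np.asarray(directions_deg, dtype=float))
        # Mean resultant length R of the unit vectors.
        R = np.hypot(np.mean(np.cos(theta)), np.mean(np.sin(theta)))
        return 1.0 - R

    # A steady unidirectional flow vs. widely dispersed flow directions.
    print(circular_variance([85, 90, 95]))         # low variance
    print(circular_variance([10, 120, 250, 340]))  # high variance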

    Geometry of Discrete Quantum Computing

    Conventional quantum computing entails a geometry based on the description of an n-qubit state using $2^n$ infinite-precision complex numbers denoting a vector in a Hilbert space. Such numbers are in general uncomputable using any real-world resources, and, if we take physical law to be a kind of computational algorithm of the universe, we would be compelled to alter our descriptions of physics to be consistent with computable numbers. Our purpose here is to examine the geometric implications of using finite fields $F_p$ and finite complexified fields $F_{p^2}$ (based on primes $p$ congruent to $3 \bmod 4$) as the basis for computations in a theory of discrete quantum computing, which would therefore become a computable theory. Because the states of a discrete n-qubit system are in principle enumerable, we are able to determine the proportions of entangled and unentangled states. In particular, we extend the Hopf fibration that defines the irreducible state space of conventional continuous n-qubit theories (the complex projective space $CP^{2^n-1}$) to an analogous discrete geometry, in which the Hopf circle for any $n$ is found to be a discrete set of $p+1$ points. The tally of unit-length n-qubit states is given, and reduced via the generalized Hopf fibration to $DCP^{2^n-1}$, the discrete analog of the complex projective space, which has $p^{2^n-1}(p-1)\prod_{k=1}^{n-1}(p^{2^k}+1)$ irreducible states. Using a measure of entanglement, the purity, we explore the entanglement features of discrete quantum states and find that the n-qubit states based on the complexified field $F_{p^2}$ have $p^n(p-1)^n$ unentangled states (the product of the tally for a single qubit) with purity 1, and $p^{n+1}(p-1)(p+1)^{n-1}$ maximally entangled states with purity zero.
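    The state tallies quoted above are closed-form counts and can be reproduced directly. A minimal Python sketch (the function names are ours, chosen for illustration; the formulas are the abstract's):

    # Counts of discrete n-qubit states over F_{p^2}, for primes p
    # congruent to 3 mod 4 (e.g. p = 3, 7, 11), per the formulas above.

    def irreducible_states(p, n):
        # |DCP^{2^n - 1}| = p^(2^n - 1) (p-1) prod_{k=1}^{n-1} (p^(2^k) + 1)
        count = p ** (2 ** n - 1) * (p - 1)
        for k in range(1, n):
            count *= p ** (2 ** k) + 1
        return count

    def unentangled_states(p, n):
        # p^n (p-1)^n: the product of the single-qubit tally p(p-1).
        return p ** n * (p - 1) ** n

    def maximally_entangled_states(p, n):
        # p^(n+1) (p-1) (p+1)^(n-1)
        return p ** (n + 1) * (p - 1) * (p + 1) ** (n - 1)

    for p in (3, 7):  # both satisfy p = 3 mod 4
        print(p, irreducible_states(p, 2), unentangled_states(p, 2),
              maximally_entangled_states(p, 2))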

    Evaluating a Prioritization Framework for Monitoring Chemicals of Emerging Concern in the Salish Sea Based on Lessons Learned from Western States Programs

    We are now approaching a tipping point where priority pollutants may no longer be the primary driver of environmental impairment. Contaminants of Emerging Concern (CECs) present a challenge to environmental monitoring and management programs because the rapidly evolving state of knowledge requires an adaptive and transparent prioritization framework. The state of the science, treatment technologies, and regulatory policies are not well understood, CEC quantification is challenging and expensive, and the management approach is not simply a concentration-based criterion but may include biological end-points. The need for shared responsibility and leveraging across many programs was evaluated through a series of webinars with other programs studying CECs, including the Columbia River Toxics Program, Washington Department of Ecology, Oregon Department of Environmental Quality, Southern California Coastal Water Research Project, and San Francisco Bay Regional Monitoring Program. The lessons learned were articulated into a 10-step prioritization framework:
    1) Develop clear objectives, definitions of CECs, and a target audience.
    2) Identify conceptual models that provide a clear target for the appropriate media to monitor for various chemicals, and at what frequency.
    3) Define the chemical characteristics in terms of usage, persistence, bioaccumulation, and toxicity.
    4) Develop a target CEC analyte list.
    5) Screen and rank the CEC analyte list based on chemical characteristics, environmental concentrations, and the state of the science.
    6) Create a transparent prioritization process that includes input from key stakeholders and end users and builds consensus during development.
    7) Prioritize the chemical categories using specific metrics such as available data, status of analytical methods, available thresholds, costs, programmatic concerns, and opportunities for leveraging with other programs (a minimal scoring sketch follows this list).
    8) Identify potential biological end-points and other indicators.
    9) Create a formal review process to support data and knowledge sharing, and adaptively manage prioritization to include new science and critical research gaps.
    10) Develop a working group to facilitate leveraging of funds across many programs.
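    Steps 5 and 7 amount to a weighted multi-criteria ranking. The sketch below is a minimal Python illustration of that idea; the analyte names, criteria, scores, and weights are invented for illustration and are not drawn from any of the programs above.

    from dataclasses import dataclass

    @dataclass
    class CandidateCEC:
        # All scores are hypothetical, normalised to 0-1 by a screening step.
        name: str
        persistence: float
        bioaccumulation: float
        toxicity: float
        method_readiness: float  # status of analytical methods

    # Hypothetical weights; a real program would set these with stakeholders.
    WEIGHTS = {"persistence": 0.30, "bioaccumulation": 0.25,
               "toxicity": 0.30, "method_readiness": 0.15}

    def priority_score(c: CandidateCEC) -> float:
        # Weighted sum over the screening criteria.
        return sum(getattr(c, field) * w for field, w in WEIGHTS.items())

    candidates = [CandidateCEC("analyte A", 0.8, 0.6, 0.9, 0.7),
                  CandidateCEC("analyte B", 0.4, 0.9, 0.5, 0.3)]
    for c in sorted(candidates, key=priority_score, reverse=True):
        print(f"{c.name}: {priority_score(c):.2f}")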

    Non-maximally entangled states: production, characterization and utilization

    Using a spontaneous downconversion photon source, we produce true non-maximally entangled states, i.e., without the need for post-selection. The degree and phase of entanglement are readily tunable, and are characterized both by a standard analysis using coincidence minima and by quantum state tomography of the two-photon state. Using the latter, we experimentally reconstruct the reduced density matrix for the polarization. Finally, we use these states to measure the Hardy fraction, obtaining a result that is $122\sigma$ from any local-realistic result.
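    A minimal sketch of how such a tunable state relates to the reduced density matrix it implies, assuming the standard two-parameter form $\cos\theta\,|HH\rangle + e^{i\phi}\sin\theta\,|VV\rangle$ (the abstract does not give the exact parameterization used in the experiment):

    import numpy as np

    def two_photon_state(theta, phi):
        # |psi> = cos(theta)|HH> + e^{i phi} sin(theta)|VV>, written in
        # the polarization basis {|HH>, |HV>, |VH>, |VV>}.
        psi = np.zeros(4, dtype=complex)
        psi[0] = np.cos(theta)
        psi[3] = np.exp(1j * phi) * np.sin(theta)
        return psi

    def reduced_purity(psi):
        # Partial trace over the second photon gives the reduced density
        # matrix rho_A; its purity Tr(rho_A^2) is 1 for a product state
        # and 1/2 for a maximally entangled pair of qubits.
        rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
        rho_A = np.einsum('ikjk->ij', rho)
        return np.trace(rho_A @ rho_A).real

    print(reduced_purity(two_photon_state(np.pi / 4, 0.0)))  # 0.5 (maximal)
    print(reduced_purity(two_photon_state(np.pi / 8, 0.0)))  # 0.75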